Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks
For gradient computation across the time domain in training Spiking Neural Networks (SNNs), two different approaches have been studied independently. The first computes gradients with respect to changes in spike activation (activation-based methods), and the second computes gradients with respect to changes in spike timing (timing-based methods). In this work, we present a comparative study of the two methods and propose a new supervised learning method that combines them. The proposed method utilizes each individual spike more effectively by shifting spike timings, as in the timing-based methods, as well as by generating and removing spikes, as in the activation-based methods. Experimental results show that the proposed method achieves higher performance, in terms of both accuracy and efficiency, than the previous approaches.
Review for NeurIPS paper: Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks
Weaknesses: More detailed discussion of the main weaknesses of this work: (P1, lack of novelty): The authors' main argument is that the activation- and timing-based methods have their respective pros and cons, so combining them via a weighted sum of the two (in terms of the intermediate derivative partial_L/partial_V that both methods compute) will retain the best of both worlds. While this is a reasonable assumption, the idea lacks a fundamentally new contribution. Timing and spike activation are just two facets of the same spiking phenomenon. On what basis can the derivatives with respect to timing and activation be added together? I don't see an appropriate unifying mathematical treatment here.
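To make the reviewer's concern concrete, here is a minimal sketch of the kind of weighted-sum combination being discussed. This is not the paper's actual implementation: the surrogate derivative, the timing-gradient form, and the mixing weight `alpha` are all assumptions chosen for illustration. Both rules produce an estimate of the intermediate derivative dL/dV, and the combined rule simply mixes them.

```python
import numpy as np

def activation_grad(v, threshold=1.0, width=0.5):
    # Surrogate derivative of the spike activation w.r.t. membrane
    # potential V (a common rectangular surrogate; an assumption here,
    # not necessarily the one used in the paper).
    return (np.abs(v - threshold) < width).astype(float) / (2.0 * width)

def timing_grad(dv_dt, eps=1e-3):
    # Timing-based derivative at spike time: dt/dV = -1 / (dV/dt),
    # following the standard timing-gradient derivation (an assumption).
    # eps guards against division by a near-zero slope.
    return -1.0 / np.maximum(dv_dt, eps)

def combined_dL_dV(dL_da, dL_dt, v, dv_dt, alpha=0.5):
    # Weighted sum of the activation- and timing-based contributions
    # to dL/dV; alpha is a hypothetical mixing weight.
    act_term = dL_da * activation_grad(v)
    tim_term = dL_dt * timing_grad(dv_dt)
    return alpha * act_term + (1.0 - alpha) * tim_term
```

With `alpha=1` this reduces to a purely activation-based update, and with `alpha=0` to a purely timing-based one, which is the "best of both worlds" framing the reviewer is questioning: the two terms have different units and derivations, and the sketch makes no attempt to reconcile them mathematically.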
Review for NeurIPS paper: Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks
This paper induced divergent reviews. Three of the reviewers felt it was a decent contribution that provided a novel insight about how to combine two different types of learning rules, but one reviewer felt strongly that it did not provide any major contribution and only engaged in a relatively trivial mixing of models. Given these reviews, it was hard to come to a decision, but 'accept' seemed appropriate given that 3 out of 4 reviewers were fairly positive.